📚 node [[generalization|generalization]]
⥅ related node [[generalization]]
⥅ related node [[generalization_curve]]
⥅ node [[generalization]] pulled by Agora

generalization

Go back to the [[AI Glossary]]

A model's ability to make correct predictions on new, previously unseen data, as opposed to the data used to train it.
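One way to see generalization (or the lack of it) is to compare a model's accuracy on its own training data against its accuracy on held-out data. The sketch below is a minimal, hypothetical illustration: a 1-nearest-neighbour classifier on a noisy synthetic 1-D task memorizes its training set perfectly, yet scores lower on unseen points. The dataset, noise rate, and helper names are invented for this example.

```python
import random

random.seed(0)

def make_data(n):
    """Noisy 1-D classification task: label is 1 when x > 0.5,
    flipped with probability 0.1 to simulate label noise."""
    xs = [random.random() for _ in range(n)]
    ys = [(1 if x > 0.5 else 0) ^ (1 if random.random() < 0.1 else 0) for x in xs]
    return list(zip(xs, ys))

train, test = make_data(100), make_data(100)

def predict_1nn(memorized, x):
    # 1-nearest-neighbour: return the label of the closest stored point.
    return min(memorized, key=lambda p: abs(p[0] - x))[1]

def accuracy(memorized, eval_data):
    return sum(predict_1nn(memorized, x) == y for x, y in eval_data) / len(eval_data)

train_acc = accuracy(train, train)  # evaluated on the memorized data itself
test_acc = accuracy(train, test)    # evaluated on previously unseen data
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

The gap between `train_acc` (exactly 1.0, since each training point is its own nearest neighbour) and `test_acc` is the generalization gap: training-set performance alone says nothing about how the model handles new data.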

⥅ node [[generalization_curve]] pulled by Agora

generalization curve

Go back to the [[AI Glossary]]

A loss curve showing loss for both the training set and the validation set. A generalization curve can help you detect possible overfitting. For example, the following generalization curve suggests overfitting because loss for the validation set ultimately becomes significantly higher than for the training set.

[Figure: a generalization curve. A Cartesian plot with 'loss' on the y-axis and 'iterations' on the x-axis, showing one loss curve for the training set and one for the validation set. The two curves start off similarly, but the training-set curve eventually dips far lower than the validation-set curve.]
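The divergence the figure describes can also be detected numerically: track validation loss per iteration and flag the point where it stops improving and starts climbing while training loss keeps falling. A minimal sketch, using invented loss values and a hypothetical `patience` threshold (the same idea behind early stopping):

```python
# Hypothetical per-iteration losses: training loss keeps falling while
# validation loss bottoms out and then climbs -- the overfitting
# signature that a generalization curve makes visible.
train_loss = [1.0, 0.6, 0.4, 0.30, 0.22, 0.16, 0.11, 0.07, 0.05, 0.03]
val_loss   = [1.1, 0.7, 0.5, 0.42, 0.38, 0.37, 0.39, 0.44, 0.52, 0.61]

def overfit_point(val_losses, patience=2):
    """Return the last iteration before validation loss rose for
    `patience` consecutive steps, or None if it never does."""
    rises = 0
    for i in range(1, len(val_losses)):
        rises = rises + 1 if val_losses[i] > val_losses[i - 1] else 0
        if rises >= patience:
            return i - patience  # last iteration before the sustained rise
    return None

best = overfit_point(val_loss)
print(f"validation loss starts diverging after iteration {best}")  # iteration 5
```

Stopping training near that iteration (here, at the validation-loss minimum) is the usual practical response to the overfitting pattern the curve reveals.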

📖 stoas
⥱ context